Quick guides
Log in to the cluster
Accounts are personal and non-transferable. If the project requires access for another person or an increase in the assigned resources, the project manager is in charge of making that kind of request.
Login nodes
Cluster | Login nodes
---|---
MareNostrum 5 (GP) | glogin1.bsc.es, glogin2.bsc.es, glogin4.bsc.es (BSC Only)
MareNostrum 5 (ACC) | alogin1.bsc.es, alogin2.bsc.es, alogin4.bsc.es (BSC Only)
CTE-AMD | amdlogin1.bsc.es
Huawei | hualogin1.bsc.es
Nord3v2 | nord1.bsc.es, nord2.bsc.es, nord3.bsc.es, nord4.bsc.es
All connections must be made through SSH (Secure Shell), for example:
mylaptop$> ssh {username}@glogin1.bsc.es
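If you connect frequently, you can add a host alias to the SSH configuration on your local machine so that the hostname and username do not have to be typed each time. This is only a convenience sketch; the alias name mn5 is an arbitrary example and {username} must be replaced with your actual account:
Host mn5
    HostName glogin1.bsc.es
    User {username}
With that entry in your local ~/.ssh/config, the login above becomes: mylaptop$> ssh mn5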
Password change
For security reasons, you must change your initial password.
To change your password, you have to log in to the Transfer machine:
mylaptop$> ssh {username}@transfer1.bsc.es
using the same username and password as in the cluster. Then, use the 'passwd' command.
The new password will become effective after about 10 minutes.
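Putting the steps together, a first password change looks like this (the transfer1 prompt is only illustrative; passwd asks for the current password and then for the new one twice):
mylaptop$> ssh {username}@transfer1.bsc.es
transfer1$> passwd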
Access from/to the outside
The login nodes are the only nodes accessible from the outside, but no connections are allowed from the cluster to the outside world for security reasons.
All file transfers from/to the outside must be executed from your local machine and not within the cluster:
Example to copy files or directories from MN5 to an external machine:
mylaptop$> scp -r {username}@transfer1.bsc.es:"MN5_SOURCE_dir" "mylaptop_DEST_dir"
Example to copy files or directories from an external machine to MN5:
mylaptop$> scp -r "mylaptop_SOURCE_dir" {username}@transfer1.bsc.es:"MN5_DEST_dir"
Directories and file systems
There are different partitions of disk space. Each area may have specific size limits and usage policies.
Basic directories under GPFS
GPFS (General Parallel File System, a distributed networked filesystem) can be accessed from all the nodes and the Transfer machine (transfer1.bsc.es).
The available GPFS directories and file systems are:
/apps
: This filesystem holds the applications and libraries already installed on the machine for everyday use. Users cannot write to it.
/gpfs/home
: After logging in, this is the default work area where users can save source code, scripts, and other personal data. The space quota is per user (and relatively small). It is not recommended to run jobs here; please run your jobs in your group's /gpfs/projects or /gpfs/scratch instead.
/gpfs/projects
: Intended for data sharing between users of the same group or project. All members of the group share the space quota.
/gpfs/scratch
: Similar to /gpfs/projects, but without backup. Used, for example, to store temporary job files during execution. All members of the group share the space quota.
Storage space limits/quotas
To check the disk space limits, as well as your current usage quotas for each file system, run:
$> bsc_quota
Running jobs
Submit to queues
Job submission to the queue system has to be done through Slurm directives; a minimal job script sketch is shown after the commands below. For example:
To submit a job:
$> sbatch {job_script}
To show all the submitted jobs:
$> squeue
To cancel a job:
$> scancel {job_id}
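As a minimal sketch of a {job_script} to pass to sbatch, the following requests one task with four cores; the account, queue, wall-clock limit, and executable name are placeholders that must be adapted to your project:
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account={group}
#SBATCH --qos={queue}
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00
#SBATCH --output=example_%j.out
#SBATCH --error=example_%j.err

srun ./my_program # replace with your actual executable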
Queue limits
To check the limits for the queues (QoS) assigned to the project, you can run:
$> bsc_queues
Interactive jobs
Interactive sessions
Allocation of an interactive session has to be done through Slurm, for example:
To request an interactive session on a compute node:
$> salloc -A {group} -q {queue} -n 1 -c 4 # example to request 1 task and 4 CPUs (cores) per task
To request an interactive session on a non-shared (exclusive) compute node:
$> salloc -A {group} -q {queue} --exclusive
To request an interactive session on a compute node using GPUs:
$> salloc -A {group} -q {queue} -c 80 --gres=gpu:2 # example to request 80 CPUs (cores) + 2 GPUs
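Depending on the configuration, salloc either opens a shell directly on the allocated node or returns a shell that holds the allocation; in both cases you can launch tasks inside the allocation with srun, for example (./my_program is a hypothetical executable):
$> srun ./my_program # runs inside the current allocation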